eXplainable and Reliable Against Adversarial Machine Learning in Data Analytics
Authors

Abstract
Machine learning (ML) algorithms are nowadays widely adopted in different contexts to perform autonomous decisions and predictions. Due to the high volume of data shared in recent years, ML algorithms are more accurate and reliable, since the training and testing phases are more precise. An important concept to analyze when defining ML algorithms concerns adversarial machine learning attacks. These attacks aim to create manipulated datasets to mislead algorithm decisions. In this work, we propose new approaches able to detect and mitigate malicious attacks against a system. In particular, we investigate the Carlini-Wagner (CW), the fast gradient sign method (FGSM), and the Jacobian-based saliency map (JSMA) attacks. The aim of this work is to exploit detection algorithms as countermeasures to these attacks. Initially, we performed some tests by using canonical ML algorithms with hyperparameters optimization to improve the metrics. Then, we adopt original reliable AI algorithms, based either on eXplainable AI (Logic Learning Machine) or on Support Vector Data Description (SVDD). The obtained results show how classical algorithms may fail to identify an attack, while the proposed methodologies are more prone to correctly identify a possible attack. The evaluation of the proposed methodology was carried out in terms of a good balance between FPR and FNR on three real-world application datasets: Domain Name System (DNS) tunneling, Vehicle Platooning, and Remaining Useful Life (RUL). In addition, a statistical analysis was performed to improve the robustness of the trained models, including evaluating their performance in terms of runtime and memory consumption.
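The abstract mentions the fast gradient sign method (FGSM) among the attacks studied. As an illustration only (the paper's own code is not part of this listing, and the model, variable names, and toy data below are hypothetical), FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∂L/∂x). A minimal sketch for a binary logistic-regression classifier:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM attack on a logistic-regression model sigmoid(w.x + b).

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input is (sigmoid(w.x + b) - y) * w, so the
    adversarial example is x + eps * sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                            # dL/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# Toy example: a point the model classifies as class 1 with high confidence
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# The perturbed point moves against the gradient of correct classification:
# the decision score w.x_adv + b drops from 1.5 to 0.6.
```

The same sign-of-gradient step underlies FGSM on deep networks; only the gradient computation (backpropagation through the model) changes.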
Similar Resources
Visual Analytics for Explainable Deep Learning
Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation regarding the decisions made by deep learning models and absence of control over their internal processes act as major drawbacks in critical dec...
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class proba...
Data Distillation, Analytics, and Machine Learning
In this paper we first provide a brief overview on latent variables modeling methods for process data analytics and the related objectives to distill desirable components or features from a mixture of measured variables. These methods are then extended to modeling high dimensional time series data to extract the most dynamic latent variables one after another, which are referred to as principal...
Cognitive Analytics: Going Beyond Big Data Analytics and Machine Learning
This chapter defines analytics and traces its evolution from its origin in 1988 to its current stage—cognitive analytics. We discuss types of learning and describe classes of machine learning algorithms. Given this backdrop, we propose a reference architecture for cognitive analytics and indicate ways to implement the architecture. A few cognitive analytics applications are briefly described. T...
Journal

Journal title: IEEE Access

Year: 2022

ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2022.3197299